Method and device for visual compensation
Patent abstract:
METHOD AND DEVICE FOR VISUAL COMPENSATION. A method 300 and device for visual compensation capture 330 an image using an image generator, detect 360 whether glasses are present in the image, and set 363 an electronic visual screen to a larger font size if glasses are not detected as present in the image. If glasses are detected as present in the image, the electronic visual screen is set 367 to a normal font size. The method and device can be activated 320 (for example) by an incoming call or message, by a touchscreen activation, by a key press, or by a detected movement of the device. The method can be repeated from time to time to detect whether a user puts on (or removes) glasses after the first image is captured. The method and device compensate users with presbyopia (and some other types of visual impairment) who intermittently wear glasses.
Publication number: BR112012015502B1
Application number: R112012015502-4
Filing date: 2010-12-21
Publication date: 2021-01-05
Inventor: William P. Alberth
Applicant: Google Technology Holdings LLC
Primary IPC:
Patent description:
FIELD This invention relates generally to electronic devices with visual displays, and more particularly to presenting text and images to users who intermittently wear glasses. BACKGROUND Some users of electronic devices with visual displays suffer from presbyopia, which results in difficulty focusing on nearby objects. This can produce eye strain when reading text or viewing images up close. Presbyopia is usually treated with corrective lenses, such as reading glasses. These glasses are often worn only intermittently. If a user unexpectedly needs to see an electronic visual display, reading glasses may not be available. For example, a mobile phone rings and a user would like to read the caller ID, but their reading glasses are in another room or in a bag. Although many electronic devices with visual displays have settings that can be adjusted to display text using larger fonts, a user must often first navigate a menu that is itself displayed using "normal" font sizes. Thus, ironically, the process of increasing font sizes to resolve a reading difficulty becomes painful. Further complications arise when a user sometimes wants larger font sizes and sometimes wants smaller font sizes. Thus, there is an opportunity to accommodate users with presbyopia so that electronic visual displays are easier to read. BRIEF DESCRIPTION OF THE DRAWINGS The accompanying figures, where like reference numerals refer to identical or functionally similar elements throughout the separate views, together with the detailed description below, are incorporated in and form part of the specification, and serve to further illustrate embodiments of concepts that include the claimed invention, and to explain various principles and advantages of those embodiments. Figures 1 and 2 show the use of a device for visual compensation according to an embodiment. Figure 3 shows a flow diagram for a visual compensation method according to an embodiment.
Figure 4 shows a basic schematic for the device shown in Figures 1 and 2. Skilled artisans will appreciate that elements in the figures are illustrated for simplicity and clarity and have not necessarily been drawn to scale. For example, the dimensions of some of the elements in the figures may be exaggerated relative to other elements to help improve understanding of embodiments of the present invention. The device and method components have been represented, where appropriate, by conventional symbols in the drawings, showing only those specific details that are pertinent to understanding the embodiments of the present invention, so as not to obscure the disclosure with details that will be readily apparent to those of ordinary skill in the art having the benefit of the description herein. DETAILED DESCRIPTION A method and device for visual compensation uses a digital camera to capture an image of a user of the device. A facial recognition mechanism on the device detects whether a face is present in the image and whether glasses are present or absent. If glasses are absent from the image, an electronic visual display on the device uses a larger font size than if glasses are detected. The method and device help to visually compensate a presbyopic user who intermittently wears glasses. If the user is wearing reading glasses, as detected in the image, then the screen uses a normal font size. If, however, reading glasses are absent from the image, then the screen uses a larger font to assist the user in reading the text. This method and device can also be useful for a user with severe myopia. In another situation, a user who usually wears glasses places a mobile phone on a nightstand for use as a clock.
When the user wakes in the middle of the night and touches the phone, producing a user interaction trigger, the method and device capture an image (in the dark, probably resulting in underexposure), fail to detect that the user is wearing glasses, and increase the font size of the clock display to improve the readability of the indicated time. Figures 1 and 2 show the use of a device 100 for visual compensation according to an embodiment. The device 100 of Figures 1-2 is shown as a mobile phone; however, the device can be any of a number of devices with an electronic visual screen 110, including computers (desktop or laptop), electronic books, personal digital assistants or organizers, or other electronic devices with visual displays intended for viewing at arm's length (i.e., 1 meter) or less. The device 100 includes a digital camera 120, or another type of image generator, directed in substantially the same direction as the screen 110. In some implementations, the image generator can be integrated into the screen itself or located behind the screen (toward the interior of the device 100). The camera captures an image when it receives a user interaction trigger. The user interaction trigger can be (for example) an incoming call or message, a touch screen or key interaction, or a detected movement of the device 100. As devices increase in sophistication, other user interaction triggers can be developed. Generally, a user interaction trigger is any signal that would cause the screen to be active and thus anticipates visual interaction between the device and the user. Ideally, at the time camera 120 captures the image, the user is in front of screen 110 and the image will include an acceptable representation of the user's face. A processor (not shown) with a simple facial recognition mechanism detects whether a face is present in the image. If the facial recognition engine cannot detect a face, the processor may attempt to capture a replacement image.
If the facial recognition mechanism detects a face, it determines whether glasses are present. In Figure 1, user 190 is not wearing glasses. In this situation, the facial recognition mechanism detects a face without glasses. In response, the processor sets the font size of the text displayed on the electronic visual screen 110 to a large font size. The specific font size can be selected in advance by the user. In Figure 2, user 190 is wearing glasses 195. In this situation, the facial recognition mechanism detects the face with glasses, and the text displayed on the electronic visual screen 110 is set to a normal font size previously selected by the user or predetermined by a manufacturer or software application. The image capture, image analysis, and font size setting process can be repeated from time to time. If an initial user interaction trigger results in capturing an image of a face without glasses, the displayed font size will be set larger than normal. The user can subsequently put on glasses, and a later pass through image capture, image analysis, and font setting would result in a normal font size on the screen. Repetition can be based on time (for example, every 60 seconds) while the screen is active, on additional user interaction triggers, or on a combination of time and additional user interaction triggers. The repetition interval can vary (for example, from 15 seconds to 30 seconds to 60 seconds) and/or be suspended (for example, after repeating for 5 minutes, the process stops and the font size remains constant). The device 100 shown includes other components commonly found in mobile phones, such as an audio speaker 150, a keyboard 130, and a microphone 140. Of course, other components can be implemented in other versions of the device. Figure 3 shows a flow diagram 300 for a visual compensation method according to an embodiment.
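The variable repetition interval with suspension described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the interval values (15, 30, 60 seconds) and the 5-minute cutoff follow the examples in the text, and the function name `repetition_intervals` is hypothetical.

```python
def repetition_intervals(schedule=(15, 30, 60), max_total=300):
    """Yield successive repeat delays in seconds, ramping through `schedule`
    and suspending once roughly `max_total` seconds have elapsed."""
    elapsed = 0
    i = 0
    while True:
        # After the ramp, keep repeating at the last (longest) interval.
        delay = schedule[min(i, len(schedule) - 1)]
        if elapsed + delay > max_total:
            break  # process suspends; the current font size remains constant
        yield delay
        elapsed += delay
        i += 1
```

For example, `list(repetition_intervals())` yields 15, then 30, then repeated 60-second delays until about 5 minutes have elapsed, after which the font size stays fixed.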
Initially, a visual compensation device (such as the device 100 of Figures 1 and 2) is set to a visual compensation mode 310. The visual compensation mode was originally developed for users with presbyopia, but may also be appropriate for users with other types of visual problems (such as severe myopia) who sometimes wear glasses and sometimes do not. In any case, the user can activate the mode as a device setting, and the device will remain in that mode until the visual compensation mode is deactivated. When the device receives a user interaction trigger 320, the image generator on the device is activated 325 (if not previously active) and captures an image 330. A user interaction trigger can be any signal that would cause the screen to be active. Examples of user interaction triggers are: incoming call or message signals, touch screen interactions, key presses, or device movement detected using an accelerometer. In some situations, the reception step 320 and activation step 325 are optional, because the capture step 330 is automatic upon entering the visual compensation mode 310. In a mobile phone implementation, however, energy savings can be achieved by only capturing 330 images under certain circumstances. After an image is captured, a facial recognition mechanism implemented in a processor of the device performs a basic facial recognition algorithm on the image. The facial recognition mechanism determines 350 whether a face is detected in the image. If no face is present (or no face can be detected), the flow returns to the image capture step 330 to obtain a replacement image. If a face is present, the facial recognition mechanism determines 360 whether glasses are detected on the face. If no glasses are detected, the processor sets 363 the screen font size to a large font size. This is beneficial when a presbyopic user receives a phone call or message and picks up the device without first putting on glasses.
Note that if no face was detected 350, the flow can go directly to setting 363 the screen font size to large. This alternate path can be useful in situations where the image is too dark (or too light, or too blurry) to detect a face, or where the camera captured an image before the user was positioned to look at the screen. In such situations, the device defaults to a large font size, which can later be changed to a normal font size during a subsequent iteration through flow diagram 300 or via an explicit user command. If the user is wearing glasses at the time the image is captured, the processor sets 367 the screen font size to a normal font size. After the screen font size is set 363, 367, an optional timer is checked. The timer is set 335 right after the capture 330 of the image. The timer can be constant (for example, 60 seconds) or variable (first 15 seconds, then 30 seconds, then 60 seconds, etc.). After the timer has elapsed 370, the flow returns to capture an image 330 and the process repeats as needed. Thus, when a user puts on or removes reading glasses, the font size can change dynamically based on whether images captured at various times are detected as including faces wearing glasses. Some implementations may assume that, once glasses are put on, the user will not remove them. In that case, the process can end after setting the screen font size to normal 367. Some implementations may assume that if no changes in the results of image capture and image analysis occur over a predetermined number of iterations (for example, 10 iterations) or a predetermined period of time (for example, 10 minutes), no additional change is expected. In that case, the process may end after the predetermined number of iterations or the lapse of the predetermined time. Figure 4 shows a basic schematic for the device 100 shown in Figures 1 and 2, which is capable of implementing the flow diagram shown in Figure 3.
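One pass through the decision portion of flow diagram 300, with the alternate no-face path defaulting to large, can be sketched as below. The function name and the detector callables are illustrative assumptions; real face and glasses detection would come from a facial recognition library.

```python
def choose_font_size(image, detect_face, detect_glasses):
    """One pass of the decision flow: detect a face (350), then glasses (360),
    and return the font size to set (363 or 367)."""
    face = detect_face(image)  # returns a face region, or None if undetected
    if face is None:
        # Alternate path: image too dark/blurred or user not yet positioned;
        # default to the large font size (363).
        return "large"
    if detect_glasses(face):
        return "normal"  # glasses present: corrected vision assumed (367)
    return "large"       # glasses absent: assist uncorrected vision (363)
```

With stub detectors, `choose_font_size(img, lambda im: "face", lambda f: False)` returns `"large"`, matching the presbyopic-user-without-glasses case.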
As mentioned earlier, the implementation shown is for a mobile phone, and therefore the device 100 includes an antenna 495 and at least one transceiver 490 for receiving and transmitting wireless signals. The device also includes at least one processor 460 electrically coupled to the transceiver 490 to receive (among other things) incoming call signals and received message signals. The processor 460 is electrically coupled to a memory 480, which can be read-only memory, random access memory, and/or other types of memory for storing operating systems and software applications as well as user-level data. A power supply 475 supports the processor 460 (and other components of the device 100) as needed. An electronic visual display 410 is coupled to the processor 460, and the processor controls the display output. The screen can be output-only, or it can be a touch screen or another type of screen that also accepts input. An image generator 420 is also coupled to the processor 460. The image generator can be a digital camera facing the same direction as the screen, to capture an image of a user looking at the screen. A facial recognition mechanism 465 implemented in the processor 460 analyzes the captured image for a face and determines whether the face is wearing glasses. An accelerometer 470 is useful for detecting user interaction with the device 100. If a user picks up the device in anticipation of dialing a phone number or composing a message, the accelerometer can detect the movement and trigger the process described in Figure 3. Other sensors or input components (such as keys on a keyboard 430, the touch sensor of a touch screen or touch pad, or even the activation of a microphone 440 or the audio speaker 450) can also be used to trigger flow diagram 300 of Figure 3. The visual compensation method and device provide a practical way to dynamically switch font sizes, which is useful for users with presbyopia and some other types of visual impairment.
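The way Figure 4 funnels many trigger sources into a single capture-and-analyze step might be organized as in the sketch below. All class and attribute names are assumptions for illustration only; the reference numerals in the comments follow Figure 4.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class VisualCompensationDevice:
    """Illustrative grouping of Figure 4 components around processor 460."""
    capture_image: Callable[[], Optional[object]]   # image generator 420
    analyze: Callable[[Optional[object]], str]      # facial recognition 465
    font_size: str = "normal"                       # setting stored in memory 480

    def on_user_interaction(self) -> str:
        """Any trigger (call/message via transceiver 490, movement via
        accelerometer 470, a key 430, or microphone 440 activation)
        funnels into the same capture-and-set step."""
        image = self.capture_image()
        self.font_size = self.analyze(image)  # drives display 410
        return self.font_size
```

A usage sketch: constructing the device with stub callables and invoking `on_user_interaction()` updates `font_size` just as each trigger in Figure 4 would.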
With a digital camera and a simple facial recognition mechanism, faces with or without glasses can be detected by the device. If the device detects a face with glasses, corrected vision is assumed and the screen uses a normal font. If the device detects a face without glasses (or cannot detect a face), uncorrected vision is assumed and the screen uses a larger font. The image capture and image analysis process can be repeated from time to time to dynamically change the font size as glasses are put on (or removed). In the foregoing specification, specific embodiments have been described. However, one of ordinary skill in the art will appreciate that various modifications and changes can be made without departing from the scope of the invention as set forth in the claims below. Accordingly, the specification and figures are to be regarded in an illustrative rather than a restrictive sense, and all such modifications are intended to be included within the scope of the present teachings. The benefits, advantages, solutions to problems, and any element(s) that may cause any benefit, advantage, or solution to occur or become more pronounced are not to be construed as critical, required, or essential features or elements of any or all of the claims. The invention is defined solely by the appended claims, including any amendments made during the pendency of this application and all equivalents of those claims as issued. The terms "comprises", "comprising", "has", "having", "includes", "including", "contains", "containing", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises, has, includes, or contains a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such a process, method, article, or apparatus. An element preceded by "comprises a", "has a", "includes a", or "contains a" does not, without more constraints, preclude the existence of additional identical elements in the process, method, article, or apparatus that comprises, has, includes, or contains the element. The terms "a" and "an" are defined as one or more unless explicitly stated otherwise. The terms "substantially", "essentially", "approximately", "about", or any other version thereof are defined as being close to as understood by one of ordinary skill in the art, and in one non-limiting embodiment the terms are defined to be within 10%, in another embodiment within 5%, in another embodiment within 1%, and in another embodiment within 0.5%. The term "coupled" as used herein is defined as connected, although not necessarily directly and not necessarily mechanically. A device or structure that is "configured" in a certain way is configured in at least that way, but may also be configured in ways that are not listed. It will be appreciated that some embodiments may be comprised of one or more generic or specialized processors (or "processing devices") such as microprocessors, digital signal processors, customized processors, and field programmable gate arrays (FPGAs), and unique stored program instructions (including both software and firmware) that control the one or more processors to implement, in conjunction with certain non-processor circuits, some, most, or all of the functions of the method and/or device described herein. Alternatively, some or all of the functions can be implemented by a state machine that has no stored program instructions, or in one or more application-specific integrated circuits (ASICs), in which each function or some combinations of certain functions are implemented as custom logic. Of course, a combination of the two approaches can be used. In addition, an embodiment can be implemented as a computer-readable storage medium having computer-readable code stored thereon for programming a computer (for example, comprising a processor) to perform a method as described and claimed herein.
Examples of such computer-readable storage media include, but are not limited to, a hard disk, a CD-ROM, an optical storage device, a magnetic storage device, a ROM (Read-Only Memory), a PROM (Programmable Read-Only Memory), an EPROM (Erasable Programmable Read-Only Memory), an EEPROM (Electrically Erasable Programmable Read-Only Memory), and a Flash memory. Further, it is expected that one of ordinary skill, notwithstanding possibly significant effort and many design choices motivated by, for example, available time, current technology, and economic considerations, when guided by the concepts and principles disclosed herein, will be readily capable of generating such software instructions and programs and ICs with minimal experimentation. The Abstract of the Disclosure is provided to allow the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In addition, in the foregoing detailed description, it can be seen that various features are grouped together in various embodiments for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus, the following claims are hereby incorporated into the detailed description, with each claim standing on its own as separately claimed subject matter.
Claims:
Claims (18) [0001] 1. Method for visual compensation (310), characterized by the fact that it comprises: capturing (330) an image using an image generator (120, 420); detecting (350, 360) whether glasses (195) are present on a face in the image; and setting (363) an electronic visual screen (110, 410) to a larger font size if glasses are not detected as present on the face in the image. [0002] 2. Method, according to claim 1, characterized by the fact that it further comprises: setting (367) the electronic visual screen (110, 410) to a normal font size if glasses (195) are detected as present on the face in the image. [0003] 3. Method, according to claim 1, characterized by the fact that it further comprises: receiving (320) a user interaction trigger before the capturing. [0004] 4. Method, according to claim 3, characterized by the fact that the user interaction trigger is an incoming call or message. [0005] 5. Method, according to claim 3, characterized by the fact that the user interaction trigger is a touch screen activation. [0006] 6. Method, according to claim 5, characterized by the fact that the touch screen activation is an unlock command. [0007] 7. Method, according to claim 3, characterized by the fact that the user interaction trigger is a detected acceleration of the electronic visual screen (110, 410). [0008] 8. Method, according to claim 3, characterized by the fact that the user interaction trigger is a detected key press. [0009] 9. Method, according to claim 1, characterized by the fact that it further comprises: activating (325) a digital image generator (120, 420) before the capturing. [0010] 10. Method, according to claim 1, characterized by the fact that it further comprises: aligning the image generator (120, 420) to face the same direction as the electronic visual screen (110, 410) before the capturing. [0011] 11.
Visual compensation device (100), characterized by the fact that it comprises: an image generator (120, 420) for capturing an image; a processor (460), coupled to the image generator (120, 420), the processor being configured to detect (360) whether glasses (195) are on a face in the image and to set (363) a font size to large if glasses are not detected on the face in the image; and an electronic visual screen (110, 410), coupled to the processor (460), for displaying text using the font size set to large. [0012] 12. Device, according to claim 11, characterized by the fact that the processor (460) is further configured to: set (367) the font size to normal if glasses (195) are detected in the image. [0013] 13. Device, according to claim 11, characterized by the fact that it further comprises: a receiver (490), coupled to the processor (460), for receiving an incoming call or message signal. [0014] 14. Device, according to claim 11, characterized by the fact that it further comprises: an accelerometer (470), coupled to the processor (460), for detecting a movement of the device. [0015] 15. Device, according to claim 11, characterized by the fact that it further comprises: a memory (480), coupled to the processor (460), for storing the large font size setting. [0016] 16. Device, according to claim 15, characterized by the fact that the memory (480) also stores the normal font size setting. [0017] 17. Device, according to claim 11, characterized by the fact that the processor (460) comprises: a facial recognition mechanism (465) for detecting whether glasses (195) are on a face in the image. [0018] 18. Device, according to claim 11, characterized by the fact that it further comprises: a key (130, 430).
Patent family:
Publication number | Publication date
WO2011079093A1 | 2011-06-30
KR101367060B1 | 2014-02-24
EP2711810B1 | 2015-09-02
BR112012015502A2 | 2016-05-03
US20110149059A1 | 2011-06-23
RU2012131230A | 2014-01-27
CN102754046A | 2012-10-24
AU2010333769A1 | 2012-07-12
US9104233B2 | 2015-08-11
US8305433B2 | 2012-11-06
RU2550540C2 | 2015-05-10
KR20120096527A | 2012-08-30
EP2517087A1 | 2012-10-31
AU2010333769B2 | 2014-01-30
EP2517087B1 | 2013-11-20
EP2711810A1 | 2014-03-26
CN107102720A | 2017-08-29
US20120268366A1 | 2012-10-25
Legal status:
2016-11-16 | B25D | Requested change of name of applicant approved | Owner name: MOTOROLA MOBILITY LLC (US)
2016-11-29 | B25G | Requested change of headquarters approved | Owner name: MOTOROLA MOBILITY LLC (US)
2016-12-13 | B25A | Requested transfer of rights approved | Owner name: GOOGLE TECHNOLOGY HOLDINGS LLC (US)
2019-01-08 | B06F | Objections, documents and/or translations needed after an examination request according to art. 34 of the industrial property law
2019-08-06 | B06U | Preliminary requirement: requests with searches performed by other patent offices: suspension of the patent application procedure
2020-11-03 | B09A | Decision: intention to grant
2021-01-05 | B16A | Patent or certificate of addition of invention granted | Free format text: Term of validity: 10 (ten) years counted from 05/01/2021, subject to the legal conditions.
Priority:
Application number | Filing date | Patent title
US12/645,557 (US8305433B2) | 2009-12-23 | Method and device for visual compensation
US12/645,557 | 2009-12-23 |
PCT/US2010/061425 (WO2011079093A1) | 2010-12-21 | Method and device for visual compensation